-
Slow earthquakes, including low-frequency earthquakes, tremor, and geodetically detected slow-slip events, have been widely detected, most commonly at depths of 40–60 km in active subduction zones around the Pacific Ocean Basin. Rocks exhumed from these depths allow us to search for structures that may initiate slow earthquakes. The evidence for high pore-fluid pressures in subduction zones suggests that they may be associated with hydraulic fractures (e.g., veins) and with metamorphic reactions that release or consume water. Loss of continuity and resulting slip at rates exceeding 10⁻⁴ m s⁻¹ are required to produce the quasi-seismic signature of low-frequency earthquakes, but the subseismic displacement rates require that the slip rate is limited by a viscous process, such as low permeability restricting the rate at which fluid can access a propagating fracture. Displacements during individual low-frequency earthquakes are unlikely to exceed 1 mm, but they need to be more than 0.1 mm and act over an area of ~10⁵ m² to produce a detectable effective seismic moment. This limits candidate structures to those that have lateral dimensions of ~300 m and move in increments of <1 mm. Possible candidates include arrays of sheeted shear veins showing crack-seal structures; dilational arcs in microfold hinges that form crenulation cleavages; brittle-ductile shear zones in which the viscous component of deformation can limit the displacement rate during slow-slip events; slip surfaces coated with materials, such as chlorite or serpentine, that exhibit a transition from velocity-weakening to velocity-strengthening behavior with increasing slip velocity; and block-in-matrix mélanges.
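The moment arithmetic above can be checked directly from the definition M₀ = μAd. A minimal sketch, assuming a crustal shear modulus of ~30 GPa (not given in the abstract) and the standard Hanks-Kanamori conversion to moment magnitude:

```python
import math

def moment_magnitude(slip_m: float, area_m2: float, mu_pa: float = 30e9) -> float:
    """Moment magnitude from seismic moment M0 = mu * A * d (N m).
    mu ~30 GPa is an assumed shear modulus; the Hanks-Kanamori relation
    is Mw = (log10(M0) - 9.05) / 1.5."""
    m0 = mu_pa * area_m2 * slip_m
    return (math.log10(m0) - 9.05) / 1.5

# Slip of 0.1-1 mm over ~1e5 m^2 (lateral dimension sqrt(1e5) ~ 316 m,
# matching the ~300 m quoted above).
for slip in (1e-4, 1e-3):
    print(f"slip {slip * 1e3:.1f} mm -> Mw {moment_magnitude(slip, 1e5):.1f}")
```

The resulting magnitudes (roughly Mw 1.6–2.3) sit in the range commonly reported for low-frequency earthquakes.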
-
Climate change is one of the greatest challenges facing humanity, and we, as machine learning (ML) experts, may wonder how we can help. Here we describe how ML can be a powerful tool in reducing greenhouse gas emissions and helping society adapt to a changing climate. From smart grids to disaster management, we identify high impact problems where existing gaps can be filled by ML, in collaboration with other fields. Our recommendations encompass exciting research questions as well as promising business opportunities. We call on the ML community to join the global effort against climate change.
-
In recent years, enterprises have been targeted by advanced adversaries who leverage creative ways to infiltrate their systems and move laterally to gain access to critical data. One increasingly common evasive method is to hide the malicious activity behind a benign program by using tools that are already installed on user computers. These programs are usually part of the operating system distribution or another user-installed binary; this type of attack is therefore called "Living-Off-The-Land". Detecting these attacks is challenging, as adversaries may not create malicious files on the victim computers and anti-virus scans fail to detect them. We propose the design of an active learning framework called LOLAL for detecting Living-Off-the-Land attacks that iteratively selects a set of uncertain and anomalous samples for labeling by a human analyst. LOLAL is specifically designed to work well when only a limited number of labeled samples are available for training machine learning models to detect attacks. We investigate methods to represent command-line text using word-embedding techniques, and design ensemble boosting classifiers to distinguish malicious and benign samples based on the embedding representation. We leverage a large, anonymized dataset collected by an endpoint security product and demonstrate that our ensemble classifiers achieve an average F1 score of 96% at classifying different attack classes. We show that our active learning method consistently improves classifier performance as more training data is labeled, converging in fewer than 30 iterations when starting from a small number of labeled instances.
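As a rough illustration of the query loop described above, the sketch below trains a boosting classifier on embedded command lines and repeatedly asks for labels on the samples it is least sure about. Everything here is a toy stand-in: the embeddings are random vectors, the `oracle` function plays the human analyst, and only uncertainty sampling is shown (LOLAL also selects anomalous samples).

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)

def oracle(X):
    # Stand-in for the human analyst's ground-truth labels (hypothetical).
    return (X.sum(axis=1) > 0).astype(int)

# Toy embedded command lines; LOLAL derives these from word embeddings.
X_pool = rng.normal(size=(2000, 20))
X_train = rng.normal(size=(50, 20))
y_train = oracle(X_train)

for it in range(30):  # the paper reports convergence in <30 iterations
    clf = GradientBoostingClassifier().fit(X_train, y_train)
    # Uncertainty sampling: query the 10 pool points nearest p = 0.5.
    proba = clf.predict_proba(X_pool)[:, 1]
    idx = np.argsort(np.abs(proba - 0.5))[:10]
    # The analyst labels the queried samples; fold them into training.
    X_train = np.vstack([X_train, X_pool[idx]])
    y_train = np.concatenate([y_train, oracle(X_pool[idx])])
    X_pool = np.delete(X_pool, idx, axis=0)
```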
-
Below the seismogenic zone, faults are expressed as zones of distributed ductile strain in which minerals deform chiefly by crystal-plastic and diffusional processes. We present a case study from the Caledonian frontal thrust system in northwest Scotland to better constrain the geometry, internal structure, and rheology of a major zone of reverse-sense shear below the brittle-to-ductile transition (BDT). Rocks now exposed at the surface preserve a range of shear zone conditions reflecting progressive exhumation of the shear zone during deformation. Field-based measurements of structural distance normal to the Moine Thrust Zone, which marks the approximate base of the shear zone, together with microstructural observations of active slip systems and the mechanisms of deformation and recrystallization in quartz, are paired with quantitative estimates of differential stress, deformation temperature, and pressure. These are used to reconstruct the internal structure and geometry of the Scandian shear zone from ~10 to 20 km depth. We document a shear zone that localizes upward from a thickness of >2.5 km to <200 m, with temperature ranging from ~450 °C to ~350 °C and differential stress from 15 to 225 MPa. We use estimates of deformation conditions in conjunction with independently calculated strain rates to compare experimentally derived constitutive relationships with conditions observed in naturally deformed rocks. Lastly, pressure and converted shear stress are used to construct a crustal strength profile through this contractional orogen. We calculate a peak shear stress of ~130 MPa in the shallowest rocks, which were deformed at the BDT, decreasing to <10 MPa at depths of ~20 km. Our results are broadly consistent with previous studies that find the BDT is the strongest region of the crust.
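A note on the stress conversion: the quoted ~130 MPa peak shear stress matches a von Mises-type conversion, τ = σ_d/√3, applied to the 225 MPa peak differential stress (225/√3 ≈ 130; likewise 15/√3 ≈ 9 MPa, i.e., <10 MPa at ~20 km). A minimal sketch of that conversion; the intermediate depth value is illustrative, not from the paper:

```python
import math

def shear_stress(differential_stress_mpa: float) -> float:
    # Von Mises-type conversion tau = sigma_d / sqrt(3), one common
    # choice for ductile shear zones (assumed here, not stated above).
    return differential_stress_mpa / math.sqrt(3)

# Endpoints from the abstract (~225 MPa near the BDT at ~10 km depth,
# 15 MPa at ~20 km); the 15 km value is invented for illustration.
for depth_km, sigma_d in [(10, 225), (15, 60), (20, 15)]:
    print(f"{depth_km} km: sigma_d = {sigma_d:3d} MPa -> "
          f"tau = {shear_stress(sigma_d):5.1f} MPa")
```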
-
We present a flow law for dislocation-dominated creep in wet quartz derived from compiled experimental and field-based rheological data. By integrating the field-based data, including independently calculated strain rates, deformation temperatures, pressures, and differential stresses, we add constraints for dislocation-dominated creep at conditions unattainable in quartz deformation experiments. A Markov Chain Monte Carlo (MCMC) statistical analysis computes internally consistent parameters for the generalized flow law: ε̇ = Aσⁿ(f_H₂O)ʳ e^(−(Q + PV)/RT), where f_H₂O is water fugacity. From this initial analysis, we identify different effective stress exponents for quartz deformed at confining pressures above and below ~700 MPa. To minimize the possible effect of confining pressure, compiled data are separated into "low-pressure" (<560 MPa) and "high-pressure" (700–1,600 MPa) groups and reanalyzed using the MCMC approach. The "low-pressure" data set, which is most applicable at midcrustal to lower-crustal confining pressures, yields the following parameters: log(A) = −9.30 ± 0.66 MPa⁻ⁿ⁻ʳ s⁻¹; n = 3.5 ± 0.2; r = 0.49 ± 0.13; Q = 118 ± 5 kJ mol⁻¹; and V = 2.59 ± 2.45 cm³ mol⁻¹. The "high-pressure" data set produces a different set of parameters: log(A) = −7.90 ± 0.34 MPa⁻ⁿ⁻ʳ s⁻¹; n = 2.0 ± 0.1; r = 0.49 ± 0.13; Q = 77 ± 8 kJ mol⁻¹; and V = 2.59 ± 2.45 cm³ mol⁻¹. Predicted quartz rheology is compared to other flow laws for dislocation creep; the calibrations presented in this study predict faster strain rates under geological conditions by more than 1 order of magnitude. The change in n at high confining pressure may result from an increase in the activity of grain-size-sensitive creep.
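To make the calibration concrete, the sketch below evaluates the flow law with the "low-pressure" parameters. Note that cm³ mol⁻¹ × MPa = J mol⁻¹, so the PV term needs no further unit conversion. The conditions chosen (50 MPa flow stress, 450 °C, 400 MPa confining pressure, water fugacity ≈ 200 MPa) are assumptions for illustration, not values from the paper:

```python
import math

R = 8.314  # gas constant, J mol^-1 K^-1

# "Low-pressure" MCMC parameters quoted above (MPa, kJ/mol, cm^3/mol).
LOG_A, N, R_EXP = -9.30, 3.5, 0.49
Q_KJ, V_CM3 = 118.0, 2.59

def strain_rate(sigma_mpa, f_h2o_mpa, t_k, p_mpa):
    """Strain rate (s^-1) from eps-dot = A sigma^n f_H2O^r e^(-(Q+PV)/RT)."""
    q_j = Q_KJ * 1e3
    pv_j = V_CM3 * p_mpa   # cm^3/mol * MPa = J/mol
    a = 10.0 ** LOG_A      # MPa^(-n-r) s^-1
    return a * sigma_mpa**N * f_h2o_mpa**R_EXP * math.exp(-(q_j + pv_j) / (R * t_k))

# Hypothetical mid-crustal conditions: 50 MPa, 450 degC, 400 MPa, f_H2O 200 MPa.
print(f"{strain_rate(50.0, 200.0, 723.15, 400.0):.1e} s^-1")  # ~1.5e-11 s^-1
```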
-
Practical quantum computing will require error rates well below those achievable with physical qubits. Quantum error correction¹,² offers a path to algorithmically relevant error rates by encoding logical qubits within many physical qubits, for which increasing the number of physical qubits enhances protection against physical errors. However, introducing more qubits also increases the number of error sources, so the density of errors must be sufficiently low for logical performance to improve with increasing code size. Here we report the measurement of logical qubit performance scaling across several code sizes, and demonstrate that our system of superconducting qubits has sufficient performance to overcome the additional errors from increasing qubit number. We find that our distance-5 surface code logical qubit modestly outperforms an ensemble of distance-3 logical qubits on average, in terms of both logical error probability over 25 cycles and logical error per cycle ((2.914 ± 0.016)% compared to (3.028 ± 0.023)%). To investigate damaging, low-probability error sources, we run a distance-25 repetition code and observe a 1.7 × 10⁻⁶ logical error per cycle floor set by a single high-energy event (1.6 × 10⁻⁷ excluding this event). We accurately model our experiment, extracting error budgets that highlight the biggest challenges for future systems. These results mark an experimental demonstration in which quantum error correction begins to improve performance with increasing qubit number, illuminating the path to reaching the logical error rates required for computation.
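For readers relating the two figures quoted above: if logical errors are independent across cycles and logical fidelity decays as (1 − 2ε)ᴺ, then the per-cycle error ε and the N-cycle error probability are interchangeable. A small sketch of that conversion (a common convention in surface-code experiments; the paper's exact fitting procedure may differ):

```python
def error_after_n_cycles(eps_per_cycle: float, n: int) -> float:
    """Logical error probability after n cycles, assuming independent
    errors: fidelity F = (1 - 2*eps)**n and P = (1 - F) / 2."""
    return 0.5 * (1.0 - (1.0 - 2.0 * eps_per_cycle) ** n)

def error_per_cycle(p_total: float, n: int) -> float:
    """Inverse: per-cycle error from the n-cycle error probability."""
    return 0.5 * (1.0 - (1.0 - 2.0 * p_total) ** (1.0 / n))

# Round-trip the distance-5 figure quoted above: 2.914% per cycle.
p25 = error_after_n_cycles(0.02914, 25)
print(f"after 25 cycles: {p25:.1%}")                 # ~38.9%
print(f"per cycle: {error_per_cycle(p25, 25):.3%}")  # recovers 2.914%
```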